Talk by Jacob Yates of the University of Maryland, Department of Biology. Given to the Redwood Center for Theoretical Neuroscience at UC Berkeley.
Most of the core computational concepts in visual neuroscience come from studies of anesthetized or fixating subjects. Although this approach maximizes experimental control by stabilizing the subject's gaze, the net result is that the most commonly used visual stimulus is a fixation point, and little is known about how active visual behavior shapes the encoding process throughout the visual hierarchy, especially at the center of gaze (the fovea).

Here, we combine high-resolution eye tracking, large-scale neurophysiology, and advanced statistical models to study neural processing at the fovea during natural visual behavior in primary visual cortex (V1) of marmoset monkeys. Using a digital Dual-Purkinje eye tracker (dDPI) that we recently developed for use with marmosets, we measure gaze position with unprecedented precision. We record from multiple laminar electrode arrays semi-chronically implanted in the foveal representation of V1 while marmosets freely view large visual stimuli or search for small Gabor targets positioned randomly in the visual field. After correcting for eye position offline, we reconstruct the retinal input to the neurons under study, which then serves as the input to likelihood-based neural models.

Using this approach, we can recover receptive-field subunits of foveal V1 neurons in freely viewing marmosets at a resolution of 1/10 of a degree of visual angle. We further generalize the approach to a higher visual area (MT), demonstrating that free viewing yields large amounts of data from minimally trained animals with no loss of detail or rigor. Our statistical modeling additionally allows us to account for shared extra-retinal modulations of the neural population while simultaneously characterizing the response to the visual input. This approach makes it possible to study visual responses at high resolution during natural vision and circumvents the training time required by standard paradigms.
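The pipeline described above (measure gaze, shift each stimulus frame to reconstruct the retinal input, then fit a likelihood-based model to recover the receptive field) can be sketched in miniature. The sketch below is illustrative only: it uses simulated white-noise stimuli, a made-up eye trace, and a basic Poisson (linear-nonlinear-Poisson) model fit by gradient ascent on the log-likelihood, not the actual models or data from the talk. All sizes, learning rates, and the Gabor-like ground-truth receptive field are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical setup: simulated stimulus, eye trace, and one neuron ---
T, S = 5000, 15                            # time bins, stimulus width (pixels)
stim = rng.standard_normal((T, S, S))      # white-noise frames on the screen
eye = rng.integers(-2, 3, size=(T, 2))     # measured gaze offset per frame (pixels)

# Step 1: reconstruct the retinal input by shifting each frame
# opposite to the measured gaze position (offline gaze correction).
retinal = np.empty_like(stim)
for t in range(T):
    retinal[t] = np.roll(stim[t], shift=tuple(-eye[t]), axis=(0, 1))

# Ground-truth Gabor-like receptive field used only to simulate spikes.
yy, xx = np.mgrid[0:S, 0:S] - S // 2
rf_true = np.exp(-(xx**2 + yy**2) / 8) * np.cos(xx)
rf_true /= np.linalg.norm(rf_true)

X = retinal.reshape(T, -1)
rate = np.exp(0.5 * X @ rf_true.ravel() - 1.0)   # LNP generative model
spikes = rng.poisson(rate)

# Step 2: maximum-likelihood fit of a Poisson GLM by gradient ascent.
# The Poisson log-likelihood gradient w.r.t. the weights is X^T (y - r).
w = np.zeros(S * S)
b = 0.0
lr = 1e-4
for _ in range(200):
    r = np.exp(X @ w + b)
    w += lr * (X.T @ (spikes - r))
    b += lr * np.sum(spikes - r)

rf_hat = w.reshape(S, S)
corr = np.corrcoef(rf_hat.ravel(), rf_true.ravel())[0, 1]
print(f"correlation between true and recovered RF: {corr:.2f}")
```

Because the gaze correction is applied before fitting, the recovered filter lives in retinal rather than screen coordinates; without that correction step, fixational eye movements would blur the estimate, which is the central methodological point of the approach described in the talk.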